75 research outputs found

    Universal Robotic Gripper based on the Jamming of Granular Material

    Full text link
    Gripping and holding of objects are key tasks for robotic manipulators. The development of universal grippers able to pick up unfamiliar objects of widely varying shape and surface properties remains, however, challenging. Most current designs are based on the multi-fingered hand, but this approach introduces hardware and software complexities. These include large numbers of controllable joints, the need for force sensing if objects are to be handled securely without crushing them, and the computational overhead to decide how much stress each finger should apply and where. Here we demonstrate a completely different approach to a universal gripper. Individual fingers are replaced by a single mass of granular material that, when pressed onto a target object, flows around it and conforms to its shape. Upon application of a vacuum the granular material contracts and hardens quickly to pinch and hold the object without requiring sensory feedback. We find that volume changes of less than 0.5% suffice to grip objects reliably and hold them with forces exceeding many times their weight. We show that the operating principle is the ability of granular materials to transition between an unjammed, deformable state and a jammed state with solid-like rigidity. We delineate three separate mechanisms, friction, suction and interlocking, that contribute to the gripping force. Using a simple model we relate each of them to the mechanical strength of the jammed state. This opens up new possibilities for the design of simple, yet highly adaptive systems that excel at fast gripping of complex objects. (10 pages, 7 figures)
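The abstract's simple force model can be illustrated with a rough order-of-magnitude sketch of two of the three mechanisms it names (friction and suction; interlocking depends on object geometry and is omitted). All numbers below are illustrative assumptions, not values from the paper:

```python
# Illustrative estimate of friction and suction contributions to the
# holding force of a jamming gripper. All parameter values are assumed.

MU = 0.6              # assumed friction coefficient, membrane/grains vs. object
DELTA_P = 85e3        # Pa, assumed vacuum differential (~0.85 atm)
A_CONTACT = 4e-4      # m^2, assumed gripper/object contact area
A_SEAL = 1e-4         # m^2, assumed airtight seal area between membrane and object

# Friction: the jammed membrane squeezes the object with roughly the
# vacuum pressure; friction over the contact area resists pull-out.
f_friction = MU * DELTA_P * A_CONTACT

# Suction: where the membrane seals against the object, the pressure
# differential pushes the object into the gripper.
f_suction = DELTA_P * A_SEAL

total = f_friction + f_suction
print(f"friction ~ {f_friction:.1f} N, suction ~ {f_suction:.1f} N, total ~ {total:.1f} N")
```

With these assumed values the estimate is on the order of tens of newtons, consistent with the abstract's claim of holding forces many times an object's weight.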

    Bias in trials comparing paired continuous tests can cause researchers to choose the wrong screening modality

    Get PDF
    Background: To compare the diagnostic accuracy of two continuous screening tests, a common approach is to test the difference between the areas under the receiver operating characteristic (ROC) curves. After study participants are screened with both screening tests, the disease status is determined as accurately as possible, either by an invasive, sensitive and specific secondary test, or by a less invasive but less sensitive approach. For most participants, disease status is approximated through the less sensitive approach. The invasive test must be limited to the fraction of the participants whose results on either or both screening tests exceed a threshold of suspicion, or who develop signs and symptoms of the disease after the initial screening tests. The limitations of this study design lead to a bias in the ROC curves we call paired screening trial bias. This bias reflects the synergistic effects of inappropriate reference standard bias, differential verification bias, and partial verification bias. The absence of a gold reference standard leads to inappropriate reference standard bias. When different reference standards are used to ascertain disease status, differential verification bias arises. When only suspicious screening test scores trigger a sensitive and specific secondary test, the result is a form of partial verification bias.
    Methods: For paired screening tests with bivariate normally distributed scores, we give formulae and programs to quantify the effect of paired screening trial bias on a paired comparison of areas under the curves. We fix the prevalence of disease and the chance that a diseased subject manifests signs and symptoms. We derive the formulae for true sensitivity and specificity, and those for the sensitivity and specificity observed by the study investigator.
    Results: The observed area under the ROC curve is quite different from the true area. The typical direction of the bias is a strong inflation in sensitivity, paired with a concomitant slight deflation of specificity.
    Conclusion: In paired trials of screening tests, when area under the ROC curve is used as the metric, bias may lead researchers to make the wrong decision as to which screening test is better.
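The verification mechanism the abstract describes can be reproduced in a toy Monte-Carlo simulation. This is a minimal sketch, not the paper's formulae: the distributions, the suspicion threshold (1.5), and the imperfect reference's 60% sensitivity are all invented for illustration, and only one test (rather than a pair) is shown:

```python
# Toy illustration of partial/differential verification bias: subjects with
# non-suspicious scores are labelled by an insensitive reference, and the
# resulting "observed" AUC is inflated relative to the true AUC.
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """Mann-Whitney (rank) estimate of the area under the ROC curve."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

n, prevalence = 50_000, 0.05
disease = rng.random(n) < prevalence
# One continuous screening test: diseased subjects score one SD higher on average.
score = rng.normal(loc=np.where(disease, 1.0, 0.0), scale=1.0)

true_auc = auc(score, disease.astype(int))

# Verification scheme: only suspicious scores trigger the gold standard;
# everyone else is labelled by an imperfect reference that detects only
# 60% of disease (and never produces false positives).
suspicious = score > 1.5
imperfect = disease & (rng.random(n) < 0.6)
observed = np.where(suspicious, disease, imperfect).astype(int)

observed_auc = auc(score, observed)
print(f"true AUC {true_auc:.3f}, observed AUC {observed_auc:.3f}")
```

The diseased subjects missed by verification are exactly the low-scoring ones, so relabelling them as negative drags the positive class upward and inflates the observed AUC, in line with the abstract's conclusion.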

    The insecure airway: a comparison of knots and commercial devices for securing endotracheal tubes

    Get PDF
    BACKGROUND: Endotracheal Tubes (ETTs) are commonly secured using adhesive tape, cloth tape, or commercial devices. The objectives of the study were (1) To compare degrees of movement of ETTs secured with 6 different commercial devices and (2) To compare movement of ETTs secured with cloth tape tied with 3 different knots (hitches). METHODS: A 17 cm diameter PVC tube with a 14 mm "mouth" hole in the side served as a mannequin. ETTs were subjected to repeated jerks, using a cable and pulley system. Measurements: (1) Total movement of ETTs relative to "mouth" (measure used for devices) (2) Slippage of ETT through securing knot (measure used for knots). RESULTS: Among commercial devices, the Dale® showed less movement than other devices, although some differences between devices did not reach significance. Among knots, Magnus and Clove Hitches produced less slippage than the Cow Hitch, but these differences did not reach statistical significance. CONCLUSION: Among devices tested, the Dale® was most secure. Within the scope offered by the small sample sizes, there were no statistically significant differences between the knots in this study.

    Dual-Labeling Strategies for Nuclear and Fluorescence Molecular Imaging: A Review and Analysis

    Get PDF
    Molecular imaging is used for the detection of biochemical processes through the development of target-specific contrast agents. Separately, modalities such as nuclear and near-infrared fluorescence (NIRF) imaging have been shown to non-invasively monitor disease. More recently, merging of these modalities has shown promise owing to their comparable detection sensitivity, and has benefited from the development of dual-labeled imaging agents. Dual-labeled agents hold promise for whole-body and intraoperative imaging and could bridge the gap between surgical planning and image-guided resection with a single, molecularly targeted agent. In this review, we summarize the literature on dual-labeled antibodies and peptides and highlight key considerations for incorporating NIRF dyes into nuclear labeling strategies. We also summarize our findings on several commercially available NIRF dyes and offer perspectives for developing a toolkit to select the optimal NIRF dye and radiometal combination for multimodality imaging.

    Middle East - North Africa and the millennium development goals : implications for German development cooperation

    Get PDF
    Closed-loop controlled combustion is a promising technique to improve the overall performance of internal combustion engines and Diesel engines in particular. In order for this technique to be implemented, some form of feedback from the combustion process is required. The feedback signal is processed and combustion-related parameters are computed from it. These parameters are then fed to a control process which drives a series of outputs (e.g. injection timing in Diesel engines) to control their values. This paper focuses on the processing and computation needed on the feedback signal before it is ready to be fed to the control process, as well as on the electronics necessary to support it. A number of feedback alternatives are briefly discussed and, for one of them, the in-cylinder pressure sensor, the CA50 (crank angle at which the integrated heat release curve reaches its 50% value) and the IMEP (Indicated Mean Effective Pressure) are identified as two potential control variables. The hardware architecture of a system capable of calculating both of them on-line is proposed, and the necessary feasibility considerations regarding size and speed are made by implementing critical blocks in VHDL targeting a flash-based Actel ProASIC3 automotive-grade FPGA.
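The two control variables the abstract names have standard textbook definitions, which can be sketched offline before committing to FPGA hardware. This is a minimal illustration on synthetic toy curves, not the paper's VHDL implementation; the volume model, pressure trace, and gamma value are all assumed:

```python
# Computing IMEP and CA50 from a sampled cylinder-pressure trace.
# The pressure and volume data here are synthetic, for illustration only.
import numpy as np

theta = np.linspace(-180.0, 180.0, 721)                 # crank angle, deg
# Toy cylinder volume (clearance 1, displacement 16 -> compression ratio 17)
Vc, Vd = 1.0, 16.0
V = Vc + Vd / 2.0 * (1.0 - np.cos(np.radians(theta)))
# Toy pressure: polytropic compression/expansion plus a combustion "bump"
p = (V[0] / V) ** 1.3 + 40.0 * np.exp(-(((theta - 10.0) / 12.0) ** 2))

# IMEP: net loop work divided by displaced volume (trapezoidal integration)
work = 0.5 * ((p[1:] + p[:-1]) * np.diff(V)).sum()
imep = work / Vd

# Apparent (gross) heat-release rate from the single-zone first-law expression
gamma = 1.35                                            # assumed ratio of specific heats
dQ = (gamma / (gamma - 1)) * p * np.gradient(V, theta) \
     + (1.0 / (gamma - 1)) * V * np.gradient(p, theta)
Q = np.cumsum(np.clip(dQ, 0.0, None))                   # integrated heat release
ca50 = theta[np.searchsorted(Q, 0.5 * Q[-1])]           # crank angle at 50% burn

print(f"IMEP ~ {imep:.2f} (pressure units), CA50 ~ {ca50:.1f} deg aTDC")
```

Both quantities reduce to running sums over the sampled trace, which is what makes an on-line FPGA implementation of the kind the paper proposes plausible.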

    The role of scientific uncertainty in compliance with the Kyoto Protocol to the climate change convention

    No full text
    Under the climate change treaties, developed countries are under a quantitative obligation to limit their emissions of greenhouse gases (GHG). This paper argues that although the climate change regime is setting up various measures and mechanisms, there will still be significant uncertainty about the actual emission reductions, and the effectiveness of the regime will depend largely on how countries actually implement their obligations in practice. These uncertainties arise from the calculation of emissions from each source, the tallying up of these emissions, adding or deducting changes due to land use change and forestry (LUCF), and finally from subtracting or adding emission reduction units (ERUs). Further, it points to the problem of uncertainty in the reductions as opposed to the uncertainty in the inventories themselves. The protocols have temporarily opted to deal with these problems through harmonisation in reporting methodologies and to seek transparency by calling on parties involved to use specific guidelines and to report on their uncertainty. This paper concludes that this harmonisation of reporting methodologies does not account for regional differences and that while transparency will indicate when countries are adopting strategies that have high uncertainty, it will not help to increase the effectiveness of the protocol. Uncertainty about compliance then becomes a critical issue. This paper proposes to reduce this uncertainty in compliance by setting a minimum requirement for the probability of compliance. © 2003 Elsevier Ltd. All rights reserved.
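The closing idea, treating compliance as a probability statement rather than a point comparison, can be sketched numerically. This is a minimal illustration under an assumed normal error model; the figures (95 Mt reported, 100 Mt cap, 10% uncertainty) are invented, not from the paper:

```python
# Probability that true emissions fall under the cap, given an uncertain
# inventory estimate modelled as N(reported, sigma^2).
from math import erf, sqrt

def prob_compliance(reported, target, sigma):
    """P(true emissions <= target) under a normal error model."""
    z = (target - reported) / sigma
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# A country reports 95 Mt against a 100 Mt cap, with 10% (9.5 Mt) uncertainty.
p = prob_compliance(reported=95.0, target=100.0, sigma=9.5)
print(f"probability of compliance: {p:.2f}")
```

Under these assumptions the probability is only about 0.70: a country can be nominally under its cap yet fail a 90% minimum-probability requirement, which is exactly why such a floor would bite.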

    Indoor A* Pathfinding through an Octree Representation of a Point Cloud

    No full text
    There is a growing demand for 3D indoor pathfinding applications. Researched in the field of robotics during the last decades of the 20th century, these methods focussed on 2D navigation. Nowadays we would like to have the ability to help people navigate inside buildings or send a drone inside a building when this is too dangerous for people. What these examples have in common is that an object with a certain geometry needs to find an optimal collision-free path between a start and goal point. This paper presents a new workflow for pathfinding through an octree representation of a point cloud. We applied the following steps: 1) the point cloud is processed so it fits best in an octree; 2) during the octree generation the interior empty nodes are filtered and further processed; 3) for each interior empty node the distance to the closest occupied node directly under it is computed; 4) a network graph is computed for all empty nodes; 5) the A* pathfinding algorithm is conducted. This workflow takes into account the connectivity of each node to all possible neighbours (face, edge and vertex, and all sizes). Besides, a collision avoidance system is pre-processed in two steps: first, the clearance of each empty node is computed, and then the maximal crossing value between two empty neighbouring nodes is computed. The clearance is used to select interior empty nodes of appropriate size, and the maximal crossing value is used to filter the network graph. Finally, both these datasets are used in A* pathfinding.
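Steps 4 and 5 of the workflow, graph construction and A* search with crossing-value filtering, can be sketched in miniature. The tiny hand-built graph, coordinates, and crossing values below are invented for illustration; a real implementation would derive them from the octree's empty nodes:

```python
# A* over a graph of empty-node centres, with edges filtered by a
# "maximal crossing value" so an agent of a given radius avoids collisions.
import heapq

# node -> (x, y, z) centre, used for the Euclidean heuristic
centres = {
    "A": (0, 0, 0), "B": (2, 0, 0), "C": (2, 2, 0),
    "D": (4, 2, 0), "E": (2, 0, 2),
}
# (node, node) -> maximal crossing value (how wide an agent fits through)
edges = {
    ("A", "B"): 1.5, ("B", "C"): 0.4,   # B-C is a narrow gap
    ("B", "E"): 1.2, ("E", "C"): 1.1, ("C", "D"): 1.5,
}

def neighbours(node, radius):
    for (u, v), crossing in edges.items():
        if crossing < radius:            # agent cannot squeeze through
            continue
        if u == node:
            yield v
        elif v == node:
            yield u

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(centres[a], centres[b])) ** 0.5

def astar(start, goal, radius):
    # Priority queue of (f = g + heuristic, g, node, path so far)
    openq = [(dist(start, goal), 0.0, start, [start])]
    seen = set()
    while openq:
        _, g, node, path = heapq.heappop(openq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbours(node, radius):
            if nxt not in seen:
                g2 = g + dist(node, nxt)
                heapq.heappush(openq, (g2 + dist(nxt, goal), g2, nxt, [*path, nxt]))
    return None

# A wide agent cannot pass the narrow B-C gap and must detour via E.
print(astar("A", "D", radius=0.5))
print(astar("A", "D", radius=0.3))
```

Filtering edges by crossing value before the search, rather than checking collisions during it, is the design choice the workflow describes: the expensive geometry work is pre-processed once per agent size.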
